We thank the reviewers for their constructive feedback. We will improve the presentation according to the suggestions. Below we address some major concerns.
Q1 [R1]: Does this work generalize to non-Euclidean domains with arbitrary distance measures?
Q2 [R1]: In terms of the name, the proposed work is more "geometric" than "topological".
Review for NeurIPS paper: Language and Visual Entity Relationship Graph for Agent Navigation
Weaknesses:
- The proposed method is tailored for VLN, which may limit its generalization to other domains (it is not new for other vision-and-language tasks).
- If the same h_t and u are fed into the three attentions, how could different contexts be learned? There seems to be something wrong, either in the technique or in the notation.
- VLN models may be sensitive to hyper-parameter tuning. It would be better if the authors could report the mean and standard deviation over multiple runs.
- In what cases would the proposed model fail?
Reviews: Modeling Tabular data using Conditional GAN
Originality: The main originality of the paper is a data transformation process applied to tabular data so that a GAN can learn from them. This is definitely highly novel and can be potentially useful in similar situations involving such distributions. Apart from this, however, I feel that the authors are overclaiming a bit regarding several challenges/contributions:
-C2 (L86): The choice of activation function certainly depends on the data format; listing that as a "challenge" seems a bit too much to me, unless the authors can point out non-trivial adaptations they made to address the problem (and I apologize if I missed that...)
-C4 (L98): again, hardly something new
-C5 (L105): mode collapse is certainly well studied in the literature (speaking of which, the authors should add references to newer approaches such as BourGAN); using an off-the-shelf solution (PacGAN), again, does not seem to me to be an important contribution.
Rephrasing the section to focus on the important contributions (C3, and perhaps C1) would make the contributions of the paper clearer, in my opinion. Quality: The paper is of high quality and the description of the techniques is sound.
Review for NeurIPS paper: Structured Prediction for Conditional Meta-Learning
In particular, more task-conditioning methods (e.g., MMAML) are considered in this paper. However, my major concern has not been addressed: the authors still omit a discussion of multi-task learning. From my perspective, the goal of meta-learning is to generalize knowledge from previous tasks so as to benefit the training of a new task, yet the setting in this paper allows a new meta-testing task to access all meta-training tasks.
Review for NeurIPS paper: A Combinatorial Perspective on Transfer Learning
This paper studies continual learning without task-boundary or task-identity information and proposes a novel model-ensemble method for this problem from a combinatorial perspective. All reviewers and the AC agree that this paper opens a novel and promising direction. The authors also design a carefully constructed algorithm, introducing non-stationary learning techniques to solve this problem. The experimental results are somewhat weak in several aspects, but given the inherent challenge of online continual learning, they are fairly convincing in justifying the main ideas and proposed methods. Note that after the rebuttal and discussion phases, several major concerns remain: first, the empirical evaluation is not realistic in terms of task diversity and scalability.